vSphere-Install-Configure-Manage

Table of Contents

References

vSphere Installation and Setup

vCenter Server and Host Management


Module 2: Introduction to Software-Defined Data Center

---Overview of vSphere and VMs---

  • vSphere

    • Virtualization
    • Virtualization of Resources
    • vSphere UI
    • ESXi
  • Terminology

    • OS -> Software designed to allocate physical resources to applications

      • Resources such as RAM, CPUs, etc
      • Windows, Linux, OSX
    • Application -> Software that runs on an OS, consuming physical resources - IMPORTANT

      • Word, Chrome, etc
    • VM -> Specialized application that abstracts hardware resources into software

      • Runs a guest OS
    • Hypervisor -> Specialized OS designed to run VMs

      • ESXi (Type 1 - Bare metal), Workstation, Fusion
    • Host -> Physical computer that provides the resources to the hypervisor

    • vSphere -> Server product. Combines ESXi and vCenter (Server Management Platform - Multiple ESXi hosts)

    • Cluster -> Group of ESXi hosts whose aggregate resources are shared by VMs (Max 64 ESXi hosts)

    • vSphere vMotion -> Service that supports migration of powered-on VMs from host to host without service interruption

      • Don't need a cluster (Not necessarily a cluster), just a couple ESXi hosts
    • vSphere HA (High Availability) -> Cluster feature that protects against hardware failures by automatically restarting VMs on unaffected hosts

    • vSphere DRS (Distributed Resource Scheduler) -> Cluster feature that uses vSphere vMotion to place VMs on hosts and ensure that each VM receives the resources it needs

      • Load balancing VMs

About VMs

A VM is a software representation of a physical computer + its components. Virtualization software converts the physical machine + components into files

  • Components
    • Guest OS
    • VMware Tools
    • Virtual Resources
      • CPU + Memory
      • Network adapters
      • Disks + controllers
      • Parallel and serial ports

Benefits of a VM

| Physical Machine | Virtual Machine |
| --- | --- |
| Difficult to move or copy | Easy to move or copy |
| Bound to specific hardware/components | Independent of specific hardware (encapsulated in files) |
| Short life cycle | Isolated from other VMs running on same hardware |
| Physical intervention/contact required | Insulated from physical hardware changes + no physical contact needed |
| Compromised systems if security breached | Less likely for a compromised VM to affect the hardware/other VMs |

Types of Virtualization

  • Server Virtualization
  • Network Virtualization (NSX)
  • Storage Virtualization (Hyperconverged Infrastructure)
    • Large aggregated datastore (vSAN) from all hosts
  • Desktop Virtualization (Horizon)

Software-Defined Data Center (SDDC)

  • All Infrastructure is virtualized and control of the data center is automated by software. vSphere is the foundation of SDDC
  • Cloud computing exploits efficient pooling of on-demand, self-managed, and virtual infrastructure.
  • Hybrid cloud
    • Migrate or interface between public and private cloud

VMware Skyline

  • Family
    • Skyline Health
    • Skyline Advisor
  • Predictive analysis and proactive recommendations.
  • Issue Avoidance
    • Proactively identifies potential issues based on configs, details and usage
    • Resolves issues before they occur -> Improves stability
  • Shortens resolution time
  • Personalized recommendations

---vSphere Virtualization of Resources---

  • ESXi Specs

    • Min 8 GB RAM
  • Portions of all resources assigned to each VM

  • Overcommitting

    • Total virtual resources more than physical resources allowed, but no one single VM can have more than total physical resources.
    • Need to be careful
    • All resources controlled by ESXi itself
    • Max ratios
      • Actual ratios will depend on the specific situation
      • CPU -> No more than a 10:1 ratio
      • RAM -> No more than a 2:1 ratio
  • Virtual switches

    • VMs have virtual NICs, connected to a virtual network switch
    • Passthrough access to a physical NIC
  • vSphere VMFS provides distributed storage architecture. Meaning multiple ESXi hosts can read or write to shared storage concurrently

  • GPU virtualization

    • GPU graphics optimize complex graphics operations
    • Max 4 vGPU devices
    • Common use cases:
      • Rich 2D and 3D graphics
      • Horizon virtual desktops
      • Graphics-intensive tasks
      • Massively parallel tasks, e.g. scientific computation
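The overcommit rule-of-thumb ratios above (no more than 10:1 vCPU:pCPU, 2:1 vRAM:pRAM) can be sketched as a quick sanity check. This is a minimal illustration; the function names and inputs are invented here, not a VMware API.

```python
# Sketch: check host overcommit ratios against the rule-of-thumb maximums
# from the notes (10:1 for CPU, 2:1 for RAM). Illustrative only.

def overcommit_ratios(vcpus_allocated, physical_cores, vram_gb, physical_ram_gb):
    """Return (cpu_ratio, ram_ratio) for a host."""
    return vcpus_allocated / physical_cores, vram_gb / physical_ram_gb

def within_guidelines(cpu_ratio, ram_ratio, max_cpu=10.0, max_ram=2.0):
    """True if both ratios stay within the notes' suggested maximums."""
    return cpu_ratio <= max_cpu and ram_ratio <= max_ram

# A 16-core / 128 GB host running VMs totalling 96 vCPUs and 192 GB vRAM
cpu_r, ram_r = overcommit_ratios(96, 16, 192, 128)
print(cpu_r, ram_r, within_guidelines(cpu_r, ram_r))  # 6.0 1.5 True
```

Remember that no single VM may be configured with more than the host's total physical resources, regardless of these ratios.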

---vSphere User Interfaces---

Ports and protocols

  • vSphere client:

    • Runs on vCenter Server (Compact webserver)
    • Access to VMs and hosts, storage, networks, etc
    • HTML5-based client
  • ESXi Host webserver

    • HTML5-based client
    • Has the VMware Host Client
  • CLI

    • VMware PowerCLI
      • Automating via scripting
      • Built on PowerShell
      • 700+ cmdlets for managing and automating vSphere
    • ESXCLI
  • Most run on port 443

  • Webservers accessed at

https://ESXi_FQDN_or_IP/ui
https://vCenter_Server_Appliance_FQDN_or_IP/ui

---ESXi Overview---

  • High security

    • Host-based firewall
      • SSH disabled by default
      • Blocks all incoming and outgoing traffic except what is explicitly allowed through
    • Memory hardening -> Randomised locations
    • Kernel module integrity -> Digitally signed + bundled as VIBs (vSphere Installation Bundles). Verified before running
    • TPM 2.0 + UEFI Secure Boot -> Chain boot order via certificates to physical hardware. Chain of integrity
    • VM disks can be encrypted
  • Small disk footprint -> ~300 MB

  • Quick boot

  • DCUI should be used for initial setup, long term use not recommended

  • System config

    • Lockdown mode -> Prevents users from directly managing ESXi except through vCenter itself
    • Config can also be changed from the vSphere Client
  • Best practice user accounts

    • Root has full admin access to ESXi
    • Guidelines
      • Practice least-privilege
      • Strictly control root privileges to ESXi hosts
      • Can assign Active Directory (AD) accounts + role-based users
        • Better to use AD rather than repeated local accounts
  • Synchronised clocks are important

    • NTP (Network Time Protocol) client
    • An ESXi host can be configured as an NTP client of an external time source
    • Time important:
      • Performance graphs
      • Logs

Check

  • Service Catalog and Self-Service Portal -> Cloud management layer
  • vRealize Operations and vRealize Orchestrator -> Service management layer

Lab 2

  • By default, any user that is a member of the ESX Admins domain group has full administrative access to any ESXi hosts that join the domain
  • vSphere ESXi Shell is the Tech Support Mode (TSM) service: Manage -> Services -> TSM -> Enable
  • SSH is TSM-SSH: ... -> TSM-SSH -> Enable

Module 3: Virtual Machines

  • Install VMware Tools on each VM
    • Helps the VM run as efficiently as possible
    • Includes:
      • Drivers
      • Adapters
      • Graphics performance increase
      • Mouse performance
      • Time sync
      • VM controls

Lab 3

  • Task 1
    • Backing: ICM-Datastore Win10-Empty/win10-Empty.vmdk
    • Capacity: 12 GB
    • Thin provisioned: no

Virtual Machine Hardware Deep Dive

  • VMs are encapsulated into a set of VM files

    • Files stored in directories on a VMFS, NFS, vSAN or vSphere Virtual Volumes
    • Files include:
      • Configuration file -> VM_name.vmx
        • Plain text file
        • Details about the VM
      • Swap files -> VM_name.vswp, vmx-VM_name.vswp
        • Same size as the allocated RAM minus any memory reservation
      • BIOS file -> VM_name.nvram
      • Log files -> vmware.log
      • Template config file -> VM_name.vmtx
      • Disk descriptor file -> VM_name.vmdk
      • Disk data file -> VM_name-flat.vmdk
        • If VM has 100 GB storage allocated, the VM_name-flat.vmdk file will also be 100 GB
      • Suspend state file -> VM_name-*.vmss
  • CPU and memory can be reconfigured

    • Shouldn't exceed the number of physical cores for any single VM
  • VMs need a storage controller

    • LSI Logic SAS might be default for newer systems, LSI Logic Parallel might be used for older
    • Large I/O interactions (>2000/sec) -> VMware Paravirtual SCSI adapter should be chosen
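The file naming scheme above is regular enough to express as a small lookup. A minimal sketch, purely illustrative string handling based on the list of VM files in these notes:

```python
# Sketch: expected file names for a VM, per the file list above.
# The function name and dict keys are invented for illustration.

def vm_files(vm_name):
    """Map file role -> expected file name for a VM called vm_name."""
    return {
        "config": f"{vm_name}.vmx",            # plain-text configuration
        "swap": f"{vm_name}.vswp",             # VM swap file
        "bios": f"{vm_name}.nvram",            # BIOS/EFI state
        "template_config": f"{vm_name}.vmtx",  # only when VM is a template
        "disk_descriptor": f"{vm_name}.vmdk",  # disk metadata
        "disk_data": f"{vm_name}-flat.vmdk",   # actual disk data
        "log": "vmware.log",                   # log file, not name-prefixed
    }

print(vm_files("Win10")["disk_data"])  # Win10-flat.vmdk
```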

Storage

  • Thick-Provisioned Virtual Disk

    • Uses all allocated space at initial creation
      • Eager-zeroed thick-provisioned disk -> Every block prefilled with a zero at creation
        • Slower to create, but no zeroing cost on first write
      • Lazy-zeroed thick-provisioned disk -> Each block zeroed only when data is first written to it
  • Thin-Provisioned Virtual Disk

    • VMs use storage space as needed
    • VM always sees the full allocated disk size
    • unmap command can be used to reclaim unused space from the storage array
    • Must be careful to not overcommit storage when using a thin-provisioned virtual disk + carefully monitor
      • Reporting and alerts can help manage allocations and capacity
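The monitoring concern above can be made concrete: with thin provisioning, total provisioned capacity may exceed the datastore's physical capacity, so alerts should fire before actual usage approaches the physical limit. A minimal sketch; the field names and the 80% threshold are illustrative assumptions, not vCenter settings.

```python
# Sketch: thin-provisioning capacity report. Thresholds are assumed values.

def datastore_report(capacity_gb, provisioned_gb, used_gb, alert_at=0.8):
    """Flag overcommitment and raise an alert when usage nears capacity."""
    overcommitted = provisioned_gb > capacity_gb   # thin disks allow this
    usage = used_gb / capacity_gb                  # actual consumption
    return {"overcommitted": overcommitted,
            "usage_pct": round(usage * 100, 1),
            "alert": usage >= alert_at}

# 1 TB datastore, 1.6 TB provisioned to thin disks, 850 GB actually used
print(datastore_report(1000, 1600, 850))
# {'overcommitted': True, 'usage_pct': 85.0, 'alert': True}
```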

Virtual Networks

  • VMs usually need a virtual NIC
  • Port groups
  • Different types of adapters available (default is emulation of an Intel E1000/E1000E network adapter, limited to 1 Gb/s)
  • Recommended is the VMXNET3 adapter
  • Can dedicate a physical adapter (PCI Passthrough)
    • SR-IOV pass-through
    • vSphere DirectPath I/O
  • PVRDMA -> VM can use RDMA adapters

Virtual Machine Console

  • VMware Remote Console Application (VMRC) can be downloaded
  • VMware fusion/workstation can also be used

---Intro to containers---

  • Kubernetes now embedded into vSphere tools (vSphere with Kubernetes, TKG)

  • Benefits of building an application as microservices and placing them inside containers

  • Container Terminology

| Term | Definition |
| --- | --- |
| Container | Application packaged with dependencies |
| Container Engine | Runtime engine that manages the containers |
| Docker | Most recognized runtime engine |
| Container Host | VM or physical machine on which the containers and container engine run |
| Kubernetes | Google-developed orchestration for containers |

Containers

Encapsulated application, along with dependent binaries and libraries. The application is decoupled from the OS and can run on any host with a container engine

  • Make coding easier
  • Fast deployment and testing. No guest OS install or boot is required
  • Can be moved around with little hassle
  • Container uses the kernel of the host
    • Isolated by default from other containers (i.e. can only access allocated resources)
  • Containers are more lightweight than using VMs to run applications with the required binaries and libraries
| VMs | Containers |
| --- | --- |
| Entire OS encapsulated | Application + required binaries/libraries encapsulated |
| Scheduled by hypervisor | Scheduled by container host OS |
| Run on the hypervisor | Run on container host OS |
| Starting VM is starting a whole OS | Starting container can be almost instant |
  • vSphere HA can restart/balance a container host OS, where Kubernetes can then load balance/restart containers

Module 4: vCenter Server

Centralized Management

  • vCenter acts as a central administration point for ESXi hosts and their VMs

    • Based on Photon OS + has a PostgreSQL database + lots of services
    • vCenter requires more resources depending on the number of VMs, networks, etc it needs to manage
      • Services included:
        • vCenter Server
        • vSphere Client
        • vCenter SSO
          • vCenter must first be added to AD
          • SAML token used to grant access
        • License Service
        • VMware Certificate Authority
        • Content Library
        • vSphere ESXi Dump Collector -> For core dumps
  • Can link multiple vCenter instances together (max 15 instances in any single vCenter SSO domain) -> Enhanced Linked Mode (ELM)

    • ELM can be created only during deployment of vCenter Server Appliance
  • The vSphere Client communicates with vCenter Server (port 443). To communicate directly with an ESXi host, use the VMware Host Client

    • In lockdown mode, only vCenter Server can communicate with the ESXi host
| Metric | vCenter 7.0 |
| --- | --- |
| Hosts per vCenter Server | 2,500 |
| Powered-on VMs | 40,000 |
| Registered VMs | 45,000 |
| Hosts per cluster | 64 |
| VMs per cluster | 8,000 |

Deploying vCenter Server

  1. Verify vCenter requirements are met
  2. Get FQDN/Static IP in DNS (Both forward and reverse zones) for host you want to install on
  3. Ensure VM clocks are synced (NTP server)
  • Installer is 2 stage process

    1. Stage 1: Server Installation
    2. Stage 2: Configuration Phase
      • SSO
      • AD
      • NTP
      • Join CEIP
  • vCenter Server Appliance Management Interface (VAMI)

    • Runs on port 5480
    • Access via https://FQDN:5480
    • Monitor resource usage
    • Backup
    • Add additional network adapters

vSphere Licensing

  • License DB is replicated across all other vCenter instances if in Enhanced Linked Mode
  • Significant consequences of an expired license

vCenter Roles and Permissions

  • Permissions

    • Requires:
      • Role -> A set/combination of privileges
      • User or group -> Indication of who can perform the action
      • Object -> Target of the action
    • Privilege -> An action that can be performed (i.e. power on VM)
    • Permission -> Gives a user/group a role for the selected object
  • Explicitly defined permissions take priority over any inherited permissions on the same object

  • If a user is a member of multiple groups on the same object, the union of the privileges is assigned to the user

    • If a user has both Admin and No Access via different groups, they get Admin permissions
  • Global permission

    1. Global perms to all vCenter objects
    2. Perms to content library
    3. Perms to tags/categories

Backing Up and Restoring vCenter

  • File-based backup and restore -> vCenter Server Appliance Management Interface

  • Image-based backup and restore -> vSphere Storage APIs

  • Can be scheduled

  • Before restoring a backup, DRS should be disabled or set to manual mode; otherwise vMotion will start automatically balancing/moving VMs around and they won't be on the specified hostname

Monitoring vCenter Server

  • vCenter Server Events
    • User-action information
  • Health check runs every 15min automatically

vCenter High Availability

  • vCenter Server is the foundation for everything -> Primary management platform

  • vSphere HA -> Restart downed VMs/vCenter on another available host

  • Active Node Failure

    • Adds another virtual NIC
    • Passive node takes over the Management Network
    • If passive node fails, vSphere HA will restart it + other options
    • If witness node fails, options will be presented

Module 5: Configuring and Managing Virtual Networks

Standard Switches

  • Avoid single points of failure

  • Virtual Switch Connections

    • VM port groups
    • VMkernel port
      • IP storage
      • vSphere vMotion
      • vSphere Fault Tolerance
      • vSAN
      • vSphere Replication
      • ESXi management network
    • Uplink ports -> Passthrough to physical NICs
  • VLAN tags

    • VLAN ID 0 -> No VLAN tagging on the virtual switch
    • VLAN ID 4095 -> All VLAN traffic passes through with tags intact; VMs handle VLANs themselves
    • Specify VLAN ID
  • Standard switch

    • Switch configured for a single host
  • Distributed switch

    • One switch configured for an entire data center
    • Up to 2000 hosts per switch
    • Consistent across all attached hosts
  • Both

    • Supports VLAN Segmentation, NIC Teaming and IPv6 Support

Standard Switch Policies

  • Available network policies

    1. Security
      • Promiscuous mode -> Allow virtual switch/port group to forward traffic regardless of destination
      • MAC address changes -> Accept or reject inbound traffic when MAC address is altered by guest
      • Forged transmits -> Accept or reject outbound traffic when MAC address is altered by guest
    2. Traffic Shaping
      • Manage amount of traffic coming from VMs and going towards the physical NICs
      • 3 settings
        1. Average rate
        2. Peak bandwidth
        3. Burst size
    3. NIC teaming
      • Increase network capacity of a virtual switch by including two or more physical NICs in a team
      • Methods
        1. Load balancing method
          • Originating virtual port ID
            • VM outbound traffic mapped to specific physical NIC
          • Source MAC Hash
            • Use a hash of the source MAC to choose an adapter
        2. Network failure detection
    4. Failover
      • VMkernel can use link status or beaconing, or both, to detect a network failure
  • Policy Levels

    • Standard switch level -> Default policies for all ports
    • Port group level -> Effective policies override default policies
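The "source MAC hash" teaming method above maps each source MAC address onto one physical NIC in the team, so a given VM's traffic consistently leaves through the same uplink. A minimal sketch; hashing only the last octet is a simplification for illustration, not ESXi's exact algorithm.

```python
# Sketch of source-MAC-hash NIC teaming: hash the VM's MAC onto an uplink
# index. Simplified (last octet modulo team size), illustrative only.

def uplink_for_mac(mac: str, team_size: int) -> int:
    """Pick a physical uplink index in [0, team_size) for a source MAC."""
    last_octet = int(mac.split(":")[-1], 16)
    return last_octet % team_size

# Two VMs on a 2-NIC team land on different uplinks
print(uplink_for_mac("00:50:56:aa:bb:04", 2))  # 0
print(uplink_for_mac("00:50:56:aa:bb:05", 2))  # 1
```

The practical consequence: traffic from one VM never spans uplinks, so a single VM cannot exceed one physical NIC's bandwidth under this policy.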

Module 6: Configuring and Managing Virtual Storage

  • Shared storage can be used for disaster recovery, high availability, and moving VMs between hosts

  • Datastore

    • Logical storage unit
    • Location for storing data that can be accessed by multiple ESXi hosts simultaneously
  • VMFS (Block Level)

    • Direct Attached
      • SSDs/HDDs -> ESXi internal (SATA, NVMe)
      • Single host only, not shared (local)
    • FC Fibre Channel
    • FCoE Fibre Channel over Ethernet
    • iSCSI -> Access SCSI LUNs
    • Concurrent access supported + file locking
  • NFS (Network File System)

  • vSAN

    • Internally shared storage
    • Aggregated datastore shared by VMs
    • Also Direct attached, but uses a network backend
    • At least one caching disk -> Must be SSD
    • Must have at least one capacity drive
  • vSphere Virtual Volume

    • FC/Ethernet
    • Individual rules can be applied to each volume
  • Fibre Channel Addressing

    • Use WWNs (World Wide Names) -> Unique 64-bit addresses

iSCSI Storage

  • IQN -> iSCSI Qualified Name

  • Reversed DNS name

  • Runtime name -> Identifies a storage path through the HBA

    • vmhbaN:C:T:L convention (adapter : channel : target : LUN)
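The vmhbaN:C:T:L convention is easy to decompose. A minimal sketch in plain string handling; the function and key names are invented for illustration:

```python
# Sketch: parse a storage runtime name of the form vmhbaN:C:T:L
# (adapter : channel : target : LUN). Illustrative only.

def parse_runtime_name(name: str):
    adapter, c, t, l = name.split(":")
    return {"adapter": adapter, "channel": int(c),
            "target": int(t), "lun": int(l)}

print(parse_runtime_name("vmhba1:0:2:4"))
# {'adapter': 'vmhba1', 'channel': 0, 'target': 2, 'lun': 4}
```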

VMFS Datastores

  • Can expand but not shrink a VMFS datastore

  • Max capacity of 64 TB

  • Multipathing algorithms

    • Scalability
      • Round Robin
    • Availability
      • Most Recently Used
      • Fixed
  • No VMs on datastore before it can be deleted

NFS Datastores

  • Keep management and NFS networks separate
  • All hosts should access a given NFS share using the same NFS version
  • No VMs on datastore before it can be unmounted

vSAN

  • Hyperconverged storage

  • Software defined storage solution

    • Minimum of three hosts to be part of the vSphere cluster and enabled for vSAN
  • Local disks on each host are pooled to create a virtual shared vSAN datastore

  • At least one flash cache device and one storage device

  • Cluster can have max 64 ESXi hosts

  • VM Storage Policies

    • Capacity
    • Availability
    • Performance

Module 7: Virtual Machine Management

Creating Templates and Clones

  • To update template

    1. Convert template to VM
    2. Make changes
    3. Convert back to template
  • Essential that every new VM is slightly different from every other one

    • Deploying from clone or template
      • Change computer name, network settings, license, Windows Security Identifier (SID)
  • Guest OS Customization Specifications

  • Instant clones

    • Utilises a "parent" VM -> Spawn off another VM process
    • Processor state, memory state, device state and disk state are identical to parent VM

Content Libraries

  • Central stores for OVFs, templates, etc

  • Single point for consistency

  • Live updates of templates

  • ISO direct mount

  • Types

    • Local
    • Published
    • Subscribed

Migrating VMs with vSphere vMotion

  • Only vMotion if VM is powered on, otherwise it is a cold vMotion/migration

  • Create VMkernel port with vSphere vMotion service enabled on source and destination host

  • Shared storage and VMkernel port required

  • 128 concurrent migrations possible per VMFS or NFS datastore

  • Encrypted vMotion must be supported on both the source and destination hosts

  • CPUs must be compatible, i.e. either Intel or AMD, not both

  • Cross vCenter Migration

    • Must be linked via Enhanced Linked Mode
    • Hosts must also be time-synchronised
    • MAC address compatibility check
    • TCP/IP stacks
  • Long-Distance vSphere vMotion

Enhanced vMotion Compatibility

  • Tries to ensure CPU compatibility between source and target hosts

  • Downgrades features to the oldest/least-featured CPU available in the cluster

  • CPUID is an interface that lets a guest OS query the available instruction sets

    • If a program doesn't query CPUID, EVC cannot hide features from it
  • If all hosts are compatible with a newer EVC mode, it can simply be raised

    • Running VMs must be restarted in order to use new features
  • If lowering to have less features, it must be powered off first

Storage vMotion

  • VAAI (vSphere Storage APIs - Array Integration) -> Offloads commands to the storage array

  • Internal or storage network transfers

  • Can combine a storage vMotion with a VM vMotion

    • Will eliminate requirement for shared storage
  • Useful for virtual infrastructure administration tasks

Creating VM Snapshots

  • A return step in case something goes wrong, a back-out plan

  • Snapshot

    • Snapshot VM configuration
    • Snapshot VM memory state (optional)
    • Snapshot virtual disks
  • Types of snapshots

    • VMFSsparse -> For virtual disks smaller than 2 TB. delta.vmdk with a 512-byte block size
    • SEsparse -> sesparse.vmdk with a 4 KB block size
    • vsanSparse -> Delta object with a 4 MB block size
      • New writes go to the delta; the base disk stays unchanged

vSphere Replication and Backup

  • Changed block tracking (CBT) allows for faster and more efficient tracking for backing up

Module 8: Resource Management and Monitoring

Virtual CPU and Memory Concepts

  • Overcommitment -> allocate more resources than physically available

  • ESXi enters a state of contention if VMs try to use more resources than are physically available

  • Virtual machine swap file, .vswp (same size as the RAM allocation minus any memory reservation).

  • Memory overhead stored within the vmx-*.vswp file
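The .vswp sizing above can be written down directly: the swap file covers the part of the VM's configured memory that is not guaranteed by a reservation. A minimal sketch with invented function names:

```python
# Sketch: VM swap file (.vswp) size = configured memory minus reservation.

def vswp_size_gb(configured_gb, reservation_gb=0):
    """Swap file covers unreserved memory; fully reserved VMs need none."""
    return max(configured_gb - reservation_gb, 0)

print(vswp_size_gb(16))      # 16  (no reservation: full size swap file)
print(vswp_size_gb(16, 4))   # 12
print(vswp_size_gb(8, 8))    # 0   (fully reserved)
```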

Overcommit Techniques

  • Transparent page sharing -> Pages with identical contents stored only once

  • Ballooning -> A balloon driver in the guest reclaims memory so the host can reallocate it to other VMs

  • Memory compression -> Compresses memory pages instead of swapping them to disk when contention is high

  • Host-level SSD swapping -> Use a SSD on the ESXi host for a host cache swap

  • VM memory paging to disk -> Using VMkernel swap space is the last resort because of poor performance

  • CPU hyperthreading -> two threads can execute on a single physical core; the scheduler still prefers placing busy threads on different cores

  • VMkernel balances load sharing

Resource Controls

  • ESXi guarantees no memory to a VM unless it is reserved (can reserve up to the VM's limit)

    • If VM doesn't have enough RAM to satisfy a reservation, it will not power on
  • CPU reservations measured in MHz or GHz, default is 0MHz

    • Reserved CPU is guaranteed to be immediately scheduled/used for the particular VM
    • The VM is never placed in a CPU ready state if it has a reserved CPU. CPU ready is a bad thing (waiting for a CPU)
  • Resource limits not recommended

Resource Allocation Shares

  • If a VM has e.g. twice as many shares of a resource as another VM, it is entitled to consume twice as much of that resource when these two VMs compete for resources.

  • Only looked at if contention occurs

| Setting | CPU Share Values | Memory Share Values |
| --- | --- | --- |
| High | 2000 shares per vCPU | 20 shares per MB of configured VM memory |
| Normal | 1000 shares per vCPU | 10 shares per MB |
| Low | 500 shares per vCPU | 5 shares per MB |
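The proportional-entitlement rule above can be sketched numerically: under contention, each VM is entitled to the contested resource in proportion to its share of the total shares. A minimal illustration using the table's share values; the function names are invented:

```python
# Sketch: share values per the table above, and proportional entitlement
# when VMs compete for a contended resource. Shares only matter under
# contention.

CPU_SHARES_PER_VCPU = {"High": 2000, "Normal": 1000, "Low": 500}

def cpu_shares(setting, vcpus):
    """Total CPU shares for a VM with the given setting and vCPU count."""
    return CPU_SHARES_PER_VCPU[setting] * vcpus

def entitlement(shares, all_shares, contended_mhz):
    """MHz this VM is entitled to when the listed VMs compete for contended_mhz."""
    return contended_mhz * shares / sum(all_shares)

a = cpu_shares("High", 2)    # 4000 shares
b = cpu_shares("Normal", 2)  # 2000 shares
print(entitlement(a, [a, b], 6000))  # 4000.0 -> twice VM b's 2000.0
```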

Resource Monitoring Tools

Performance-Tuning Methodology

  • Don't make casual changes to production systems

  • Assess performance

  • Identify limiting resource

  • Make more resources available

  • Benchmark again

Monitoring Resource Use

  • CPU ready over 5% per core might mean some contention is present

  • Any evidence of ballooning or swapping is a sign that the host might be memory-constrained

  • Storage-constrained VMs -> High read/write round-time latency

  • Network-constrained VMs -> Any dropped packets
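The rules of thumb above can be combined into a simple per-VM checklist. A minimal sketch; the metric field names and the storage latency threshold (20 ms) are assumptions for illustration, not vCenter counter names or official limits.

```python
# Sketch: flag likely resource constraints from per-VM metrics, per the
# rules of thumb in these notes. Field names and thresholds are assumed.

def contention_flags(m):
    return {
        "cpu": m["cpu_ready_pct_per_core"] > 5,     # CPU ready > 5%/core
        "memory": m["ballooned_mb"] > 0 or m["swapped_mb"] > 0,
        "storage": m["io_latency_ms"] > 20,          # assumed threshold
        "network": m["dropped_packets"] > 0,         # any drops at all
    }

sample = {"cpu_ready_pct_per_core": 7, "ballooned_mb": 0,
          "swapped_mb": 0, "io_latency_ms": 4, "dropped_packets": 0}
print(contention_flags(sample))
# {'cpu': True, 'memory': False, 'storage': False, 'network': False}
```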

Using Alarms

  • An alarm rule must contain at least one trigger
    • Can be a condition or state trigger, or an event trigger

Module 9: vSphere Clusters

  • Group of hosts and aggregate their resources

vSphere Clusters

  • A cluster can contain up to 64 ESXi hosts

  • Automatically restarts downed VMs on another host

  • Can be managed by vSphere lifecycle manager

vSphere DRS

  • DRS -> Distributed Resource Scheduler

    • Use vMotion to ensure VMs aren't under contention
  • Act of doing vMotion has a resource/time cost (resource wise)

  • Affinity rules can keep VMs together or separate them

  • DRS must meet vMotion requirements

  • To enter maintenance mode/standby mode, all VMs must be either off, suspended or migrated

  • vSphere DPM can optimise power management by vMotioning VMs away from certain ESXi hosts to save power

  • Dynamic DirectPath I/O -> Improves performance if a VM requires better performance from physical hardware via direct access

vSphere HA

  • Reduce planned downtime, prevent unplanned downtime and recover rapidly from outages/failures

  • Protects against

    • ESXi host failure -> Restarts failed VMs on other hosts
    • VM failure -> Restarts VM if VMware Tools heartbeat is not received in time
    • Application failure -> Restart VM when application heartbeat is not received within a set time
    • Datastore accessibility failure -> Restarts affected VMs on other hosts that can access the datastores
    • Network isolation -> Restart VMs if host becomes isolated on the management or vSAN network
  • Datastore Accessibility Failures

    • All paths down (APD) -> Wait 150s
      • Default do nothing

vSphere HA Architecture

  • Fault Domain Manager (FDM) starts on hosts in the cluster when vSphere HA is enabled

    • Heartbeat communications between hosts
    • Master host elected -> In charge of cluster operations
    • Sends/receives heartbeats
  • Isolation address -> Default gateway

Configuring vSphere HA

  • Slots

    • As large as the largest VM
    • Width -> largest CPU reservation
    • Height -> largest memory reservation
    • Conservative/inefficient method of calculating the total number of VMs that can be restarted

  • HA orchestrated restart -> Set restart order of VMs for VM-VM dependencies
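The slot-size admission-control math above can be sketched: slot width is the largest CPU reservation among VMs, slot height the largest memory reservation, and each host contributes as many slots as fit in its capacity. A minimal illustration with invented function names; real HA also applies default slot sizes for VMs with no reservation, which is omitted here.

```python
# Sketch of slot-size math: slot = (largest CPU reservation, largest
# memory reservation); host slots = min(cpu slots, memory slots).
# Every VM "costs" one slot, which is why the method is conservative.

def slot_counts(host_capacities, vm_reservations):
    """host_capacities: [(mhz, mb)] per host; vm_reservations: [(mhz, mb)]."""
    slot_mhz = max(r[0] for r in vm_reservations)   # slot width
    slot_mb = max(r[1] for r in vm_reservations)    # slot height
    total = sum(min(mhz // slot_mhz, mb // slot_mb)
                for mhz, mb in host_capacities)
    return slot_mhz, slot_mb, total

# Two identical hosts, three VMs with varying reservations
print(slot_counts([(12000, 32768), (12000, 32768)],
                  [(500, 1024), (2000, 4096), (1000, 2048)]))
# (2000, 4096, 12)
```

One large reservation inflates the slot size for every VM, shrinking the cluster's apparent capacity, which is exactly the inefficiency the notes point out.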

vSphere Fault Tolerance

  • Supports VMs configured with up to 8 vCPUs and 128 GB memory

  • Secondary VM fires up on another host

    • If primary fails, secondary becomes primary and another secondary is spun up
  • The primary holds a lock on a shared file; if the secondary can rename that file, the primary has gone offline, so the secondary becomes primary


Module 10: vSphere Lifecycle Management

  • Allows for centralised, automated patches and version management

Update Planner

  • vCenter upgrade prechecks and interoperability reports
  • Must join the CEIP to generate an interoperability or precheck report

Lifecycle Manager

  • Baselines attach to individual ESXi hosts

  • ESXi upgrades through uploaded ISO images

  • ESXi updates or patches are bundled into baselines + allows for dynamic updates

  • Integrates with DRS -> lifecycle manager places hosts into maintenance mode first

Working with Baselines

  • Defaults

    • Critical Host Patches
    • Non-Critical Host Patches
    • Host Security Patches
  • Can create custom baselines

    • Fixed patch baseline -> Set of patches do not change
    • Dynamic patch baseline -> Patches meet certain criteria
    • Host extension baseline -> Additional software for ESXi hosts

Images

  • ESXi images
    • ESXi base image
    • Components -> VIBs (vSphere Installation Bundles)
    • Vendor add-ons -> Sets of components that OEMs bundle together with an ESXi base image
    • Firmware and Drivers Add-On

Module 11: vSphere v7 u1